This tutorial uses data and reproduces a subset of analyses reported
in the following manuscript:
Mesquiti & Seraj.
(Preprint) The Psychological Impacts of the COVID-19 Pandemic on
Corporate Leadership
You can find this project’s GitHub repository here.
The COVID-19 pandemic sent shockwaves through the fabric of our
society. Examining the impact of the pandemic on business leadership is
particularly important for understanding how this event affected leaders’
decision-making. The present study documents the psychological effects
of the COVID-19 pandemic on chief executive officers (CEOs). This was
accomplished by analyzing CEOs’ language from quarterly earnings calls
(N = 19,536) for a year before and after lockdown. CEOs had large shifts
in language in the months immediately following the start of the
pandemic lockdowns. Analytic thinking plummeted after the world went
into lockdown, with CEOs’ language becoming less technical and more
personal and intuitive. In parallel, CEOs’ language showed signs of
increased cognitive load, as they were processing the effect of the
pandemic on their business practices. Business leaders’ use of
collective-focused language (we-usage) dropped substantially after the
pandemic began, perhaps suggesting CEOs felt disconnected from their
companies. Self-focused (I-usage) language increased, showing the
increased preoccupation of business leaders. The size of the observed
shifts in language during the pandemic also dwarfed responses to other
events that occurred dating back to 2010, with the effect lasting around
seven months.
Prep data
Load necessary packages and set Working Directory
if (!require("pacman")) install.packages("pacman")
pacman::p_load(tidyverse, zoo, lubridate, plotrix, ggpubr, caret, broom,
               kableExtra, reactable, effsize, install = TRUE)
setwd("~/Desktop/Language_Lab_Repro")
Define aesthetics
palette_map = c("#3B9AB2", "#EBCC2A", "#F21A00")
palette_condition = c("#ee9b00", "#bb3e03", "#005f73")
plot_aes = theme_classic() +
  theme(text = element_text(size = 16, family = "Futura Medium"),
        axis.text.x = element_text(angle = 45, hjust = 1),
        plot.title.position = 'plot',
        plot.title = element_text(hjust = 0.5, face = "bold", size = 20),
        axis.text = element_text(size = 14),
        axis.title = element_text(size = 20, face = "bold"))
Write our Table Functions
baseline_ttest <- function(ttest_list) {
# Extract relevant information from each test and store in a data frame
ttest_df <- data.frame(
Group1 = rep(0, 24),
Group2 = seq(1,24,1),
t = sapply(ttest_list, function(x) x$statistic),
df = sapply(ttest_list, function(x) x$parameter),
p_value = sapply(ttest_list, function(x) x$p.value)
)
# Format p-values as scientific notation
ttest_df$p_value <- format(ttest_df$p_value, scientific = T)
# Rename columns
colnames(ttest_df) <- c("t", "t + 1", "t-value", "Degrees of Freedom", "p-value")
# Create table using kableExtra
kable(ttest_df, caption = "Summary of Welch's t-Tests", booktabs = TRUE) %>%
kableExtra::kable_styling()
}
post_pandemic_summary <- function(ttest_list) {
# Extract relevant information from each test and store in a data frame
ttest_df <- data.frame(
Group1 = seq(12,23,1),
Group2 = seq(13,24,1),
t = sapply(ttest_list, function(x) x$statistic),
df = sapply(ttest_list, function(x) x$parameter),
p_value = sapply(ttest_list, function(x) x$p.value)
)
# Format p-values as scientific notation
ttest_df$p_value <- format(ttest_df$p_value, scientific = T)
# Rename columns
colnames(ttest_df) <- c("t", "t + 1", "t-value", "Degrees of Freedom", "p-value")
# Create table using kableExtra
kable(ttest_df, caption = "Summary of Welch's t-Tests", booktabs = TRUE) %>%
kableExtra::kable_styling()
}
baseline_cohen_d <- function(cohen_d_list) {
# Extract relevant information from each test and store in a data frame
cohen_d_df <- data.frame(
Group1 = rep(0, 24),
Group2 = seq(1,24,1),
Cohen_d = sapply(cohen_d_list, function(x) x$estimate)
)
# Rename columns
colnames(cohen_d_df) <- c("t", "t + 1", "Cohen's d")
# Create table using kableExtra
kable(cohen_d_df, caption = "Summary of Cohen's D", booktabs = TRUE) %>%
kableExtra::kable_styling()
}
post_cohen_d <- function(cohen_d_list) {
# Extract relevant information from each test and store in a data frame
cohen_d_df <- data.frame(
Group1 = seq(12,23,1),
Group2 = seq(13,24,1),
Cohen_d = sapply(cohen_d_list, function(x) x$estimate)
)
# Rename columns
colnames(cohen_d_df) <- c("t", "t+1", "Cohen's d")
# Create table using kableExtra
kable(cohen_d_df, caption = "Summary of Cohen's D", booktabs = TRUE) %>%
kableExtra::kable_styling()
}
baseline_mean_diff <- function(mean_diff_list) {
# Extract relevant information from each mean difference calculation and store in a data frame
mean_diff_df <- data.frame(
Group1 = rep(0, 24),
Group2 = seq(1,24,1),
mean_diff = mean_diff_list
)
# Rename columns
colnames(mean_diff_df) <- c("t", "t+1", "Mean Difference")
# Create table using kableExtra
kable(mean_diff_df, caption = "Summary of Mean Differences", booktabs = TRUE) %>%
kableExtra::kable_styling()
}
post_mean_diff <- function(mean_diff_list) {
# Extract relevant information from each mean difference calculation and store in a data frame
mean_diff_df <- data.frame(
Group1 = seq(12,23,1),
Group2 = seq(13,24,1),
mean_diff = mean_diff_list
)
# Rename columns
colnames(mean_diff_df) <- c("t", "t+1", "Mean Difference")
# Create table using kableExtra
kable(mean_diff_df, caption = "Summary of Mean Differences", booktabs = TRUE) %>%
kableExtra::kable_styling()
}
Load in the Data
data <- read_csv("https://raw.githubusercontent.com/scm1210/Language_Lab_Repro/main/Big_CEO.csv") #read in the data from github
data <- data["2019-03-01"<= data$Date & data$Date <= "2021-04-01",] #subsetting covid dates
data <- data %>% filter(WC<=5400) %>% #filter out based on our exclusion criteria
filter(WC>=25)
data$month_year <- format(as.Date(data$Date), "%Y-%m") #reformat
data_tidy <- data %>% dplyr::select(Date, Speaker, Analytic, cogproc,allnone,we,i,emo_anx) %>%
mutate(Date = lubridate::ymd(Date),
time_month = as.numeric(Date - ymd("2019-03-01")) / 30, #centering at start of march
time_month_quad = time_month * time_month) #making our quadratic term
data_tidy$Date_off <- floor(data_tidy$time_month) #rounding dates down to whole months with floor() (0 = 2019-03, 24 = 2021-04)
data_tidy$Date_covid <- as.factor(data_tidy$Date_off) #factorize
Create Tidy Data for Graphs
df <- read_csv("https://raw.githubusercontent.com/scm1210/Language_Lab_Repro/main/Big_CEO.csv")#put code here to read in Big CEO data
df <- df %>% filter(WC<=5400) %>%
filter(WC>=25)
df$month_year <- format(as.Date(df$Date), "%Y-%m") #extract month-year for the monthly graphs; kept as a separate variable so grouping doesn't clash with Date
df2 <- df %>% #aggregate to monthly summaries
group_by(month_year) %>% #grouping by month-year
summarise_at(vars("Date","WC","Analytic","cogproc","we","i"),
             list(mean = mean, std.error = std.error)) #pulling the means and SEs for our variables of interest (funs() is deprecated)
df2 <- df2["2019-01"<= df2$month_year & df2$month_year <= "2021-03",] #covid dates
Write our Stats Functions
We were interested in how language changed relative to baseline one
year pre-pandemic, as well as how language changed after the pandemic
began. As a result, we ran two separate sets of analyses: one comparing
t (time zero) to each t[i], and one comparing consecutive months, t to
t + 1, starting 12 months after our centered data point. The groups you
see are centered on 03/2019. That is, 12 = 03/2020, 13 = 04/2020, and
so on.
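To make that month indexing concrete, here is a small helper (an illustration written for this tutorial, not part of the original pipeline) that mirrors the 30-day binning used to build `Date_off` above:

```r
# Which 30-day "month" bin does a date fall into? 0 = 2019-03 (baseline),
# 12 = 2020-03 (pandemic onset), matching the Date_off coding above.
month_bin <- function(date) {
  floor(as.numeric(as.Date(date) - as.Date("2019-03-01")) / 30)
}
month_bin("2019-03-15")  # 0  (baseline month)
month_bin("2020-03-05")  # 12 (first pandemic month)
```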
Analytic Thinking
analytic_my.t = function(fac1, fac2){
t.test(data_tidy$Analytic[data_tidy$Date_covid==fac1],
data_tidy$Analytic[data_tidy$Date_covid==fac2])
} #writing our t-test function to compare t to t[i]
analytic_my.d = function(fac1, fac2){
cohen.d(data_tidy$Analytic[data_tidy$Date_covid==fac1],
data_tidy$Analytic[data_tidy$Date_covid==fac2])
} #function for cohen's d
analytic_mean <- function(fac1, fac2){
mean(data_tidy$Analytic[data_tidy$Date_covid==fac1])-
mean(data_tidy$Analytic[data_tidy$Date_covid==fac2])
} #function to do mean differences
Cognitive Processing
cogproc_my.t = function(fac1, fac2){
t.test(data_tidy$cogproc[data_tidy$Date_covid==fac1],
data_tidy$cogproc[data_tidy$Date_covid==fac2])
} #writing our t-test function to compare t to t[i]
cogproc_my.d = function(fac1, fac2){
cohen.d(data_tidy$cogproc[data_tidy$Date_covid==fac1],
data_tidy$cogproc[data_tidy$Date_covid==fac2])
} #function for cohen's d
cogproc_mean <- function(fac1, fac2){
mean(data_tidy$cogproc[data_tidy$Date_covid==fac1])-
mean(data_tidy$cogproc[data_tidy$Date_covid==fac2])
} #function to do mean differences
I-words
i_my.t = function(fac1, fac2){
t.test(data_tidy$i[data_tidy$Date_covid==fac1],
data_tidy$i[data_tidy$Date_covid==fac2])
} #writing our t-test function to compare t to t + 1
i_my.d = function(fac1, fac2){
cohen.d(data_tidy$i[data_tidy$Date_covid==fac1],
data_tidy$i[data_tidy$Date_covid==fac2])
} #function for cohen's d
i_mean <- function(fac1, fac2){
mean(data_tidy$i[data_tidy$Date_covid==fac1])-
mean(data_tidy$i[data_tidy$Date_covid==fac2])
} #function to do mean differences
We-words
we_my.t = function(fac1, fac2){
t.test(data_tidy$we[data_tidy$Date_covid==fac1],
data_tidy$we[data_tidy$Date_covid==fac2])
}
we_my.d = function(fac1, fac2){
cohen.d(data_tidy$we[data_tidy$Date_covid==fac1],
data_tidy$we[data_tidy$Date_covid==fac2])
} #function for cohen's d
we_mean <- function(fac1, fac2){
mean(data_tidy$we[data_tidy$Date_covid==fac1])-
mean(data_tidy$we[data_tidy$Date_covid==fac2])
} #function to do mean differences
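As a sanity check on what these helper functions compute, the same Welch's t-test / Cohen's d / mean-difference pattern can be run on toy data (illustrative numbers only, not the earnings-call corpus):

```r
library(effsize)  # for cohen.d()

set.seed(123)
month_a <- rnorm(200, mean = 50, sd = 10)  # stand-in for month t
month_b <- rnorm(200, mean = 40, sd = 10)  # stand-in for month t + 1

t.test(month_a, month_b)       # Welch's two-sample t-test (R's default)
cohen.d(month_a, month_b)      # standardized effect size
mean(month_a) - mean(month_b)  # raw mean difference
```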
Tidy data
Data transformations
Exclusions
- Excluded texts shorter than **25 words** or longer than **5,400 words**.
Summary of the Data
Range of Dates
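The chunk that produced this output is not shown in the rendered document; a minimal equivalent (assuming `Date` is stored as `Date` or ISO-8601 text in the `data` object loaded above, so `range()` orders it correctly) would be:

```r
range(data$Date)  # earliest and latest transcript dates in the subset
```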
## [1] "2019-03-01" "2021-04-01"
Number of Speakers
speakers <- data %>%
select(Speaker) %>%
unique() %>%
dplyr::summarize(n = n()) %>%
reactable::reactable(striped = TRUE)
speakers
Number of Transcripts
transcripts <- data %>%
select(1) %>%
dplyr::summarize(n = n()) %>%
reactable::reactable(striped = TRUE)
transcripts
Mean Word Count
word_count <- data %>%
select(WC) %>%
dplyr::summarize(mean = mean(WC)) %>%
reactable::reactable(striped = TRUE)
word_count
How did language change after the Pandemic?
Analytic Thinking
T-test
analytic_ttest <- mapply(analytic_my.t, seq(12,23,1), seq(13,24,1), SIMPLIFY = FALSE) #compare each month t (first sequence) to t + 1 (second sequence)
post_pandemic_summary(analytic_ttest)
Summary of Welch’s t-Tests
| t | t + 1 | t-value | Degrees of Freedom | p-value |
|---|---|---|---|---|
| 12 | 13 | 5.0848767 | 525.79320 | 5.124345e-07 |
| 13 | 14 | -2.5947675 | 373.06404 | 9.838752e-03 |
| 14 | 15 | -1.6725600 | 252.03534 | 9.565479e-02 |
| 15 | 16 | 1.9241684 | 377.61680 | 5.508471e-02 |
| 16 | 17 | -2.2121608 | 200.57005 | 2.808412e-02 |
| 17 | 18 | -1.6872236 | 218.93267 | 9.298455e-02 |
| 18 | 19 | 0.6199376 | 262.60906 | 5.358364e-01 |
| 19 | 20 | 0.8737898 | 128.21711 | 3.838664e-01 |
| 20 | 21 | -1.5397962 | 230.75591 | 1.249802e-01 |
| 21 | 22 | 1.9533418 | 94.31745 | 5.374259e-02 |
| 22 | 23 | -1.1497811 | 55.55164 | 2.551600e-01 |
| 23 | 24 | -1.7179003 | 2141.37219 | 8.595937e-02 |
Cohen’s D
analytic_d <- mapply(analytic_my.d,seq(12,23,1), seq(13,24,1),SIMPLIFY=FALSE)
post_cohen_d(analytic_d)
Summary of Cohen’s D
| t | t+1 | Cohen’s d |
|---|---|---|
| 12 | 13 | 0.3274589 |
| 13 | 14 | -0.1597933 |
| 14 | 15 | -0.1320224 |
| 15 | 16 | 0.1935631 |
| 16 | 17 | -0.1616992 |
| 17 | 18 | -0.1481301 |
| 18 | 19 | 0.0709701 |
| 19 | 20 | 0.0898748 |
| 20 | 21 | -0.1246402 |
| 21 | 22 | 0.2681803 |
| 22 | 23 | -0.1598304 |
| 23 | 24 | -0.0739462 |
Mean Differences
analytic_meandiff <- mapply(analytic_mean, seq(12,23,1), seq(13,24,1)) #mean differences for consecutive post-onset months
post_mean_diff(analytic_meandiff)
Summary of Mean Differences
| t | t+1 | Mean Difference |
|---|---|---|
| 12 | 13 | 4.734622 |
| 13 | 14 | -2.190455 |
| 14 | 15 | -1.844328 |
| 15 | 16 | 2.748318 |
| 16 | 17 | -2.231753 |
| 17 | 18 | -2.101267 |
| 18 | 19 | 1.158869 |
| 19 | 20 | 1.276462 |
| 20 | 21 | -1.779122 |
| 21 | 22 | 4.065080 |
| 22 | 23 | -2.075629 |
| 23 | 24 | -0.994088 |
Cogproc
T-test
cogproc_ttest <- mapply(cogproc_my.t, seq(12,23,1), seq(13,24,1), SIMPLIFY = FALSE) #compare each month t (first sequence) to t + 1 (second sequence)
post_pandemic_summary(cogproc_ttest)
Summary of Welch’s t-Tests
| t | t + 1 | t-value | Degrees of Freedom | p-value |
|---|---|---|---|---|
| 12 | 13 | -4.3160945 | 534.57336 | 1.892660e-05 |
| 13 | 14 | 1.4046015 | 366.53625 | 1.609866e-01 |
| 14 | 15 | 4.0193476 | 257.86515 | 7.665356e-05 |
| 15 | 16 | -3.1317117 | 367.29975 | 1.877275e-03 |
| 16 | 17 | 0.9867920 | 199.23919 | 3.249415e-01 |
| 17 | 18 | 4.1803820 | 223.61017 | 4.177506e-05 |
| 18 | 19 | -1.1984064 | 285.88282 | 2.317513e-01 |
| 19 | 20 | -1.4929615 | 133.61894 | 1.378047e-01 |
| 20 | 21 | 3.2109343 | 234.84605 | 1.508000e-03 |
| 21 | 22 | -1.7045407 | 87.34608 | 9.183489e-02 |
| 22 | 23 | 0.9967763 | 55.37573 | 3.232089e-01 |
| 23 | 24 | -0.9994281 | 2145.12748 | 3.177001e-01 |
Cohen’s D
cogproc_d <-mapply(cogproc_my.d, seq(12,23,1), seq(13,24,1),SIMPLIFY=FALSE)
post_cohen_d(cogproc_d)
Summary of Cohen’s D
| t | t+1 | Cohen’s d |
|---|---|---|
| 12 | 13 | -0.2755415 |
| 13 | 14 | 0.0887056 |
| 14 | 15 | 0.3007241 |
| 15 | 16 | -0.3204553 |
| 16 | 17 | 0.0732556 |
| 17 | 18 | 0.3435609 |
| 18 | 19 | -0.1329353 |
| 19 | 20 | -0.1294167 |
| 20 | 21 | 0.2476709 |
| 21 | 22 | -0.2453381 |
| 22 | 23 | 0.1405453 |
| 23 | 24 | -0.0429758 |
Mean Differences
cogproc_meandiff <- mapply(cogproc_mean, seq(12,23,1), seq(13,24,1)) #mean differences for consecutive post-onset months
post_mean_diff(cogproc_meandiff)
Summary of Mean Differences
| t | t+1 | Mean Difference |
|---|---|---|
| 12 | 13 | -0.6107287 |
| 13 | 14 | 0.1784774 |
| 14 | 15 | 0.6094504 |
| 15 | 16 | -0.6540232 |
| 16 | 17 | 0.1559844 |
| 17 | 18 | 0.7442075 |
| 18 | 19 | -0.2962170 |
| 19 | 20 | -0.2746360 |
| 20 | 21 | 0.5304979 |
| 21 | 22 | -0.5357971 |
| 22 | 23 | 0.2775877 |
| 23 | 24 | -0.0886600 |
I-words
T-test
i_ttest <- mapply(i_my.t, seq(12,23,1), seq(13,24,1), SIMPLIFY = FALSE) #compare each month t (first sequence) to t + 1 (second sequence)
post_pandemic_summary(i_ttest)
Summary of Welch’s t-Tests
| t | t + 1 | t-value | Degrees of Freedom | p-value |
|---|---|---|---|---|
| 12 | 13 | -5.1026305 | 477.85082 | 4.841738e-07 |
| 13 | 14 | 2.9682570 | 362.96961 | 3.193717e-03 |
| 14 | 15 | 2.7352278 | 261.20479 | 6.660709e-03 |
| 15 | 16 | -3.5894844 | 336.98113 | 3.805206e-04 |
| 16 | 17 | 1.7614255 | 191.52014 | 7.976208e-02 |
| 17 | 18 | 3.4393905 | 240.73312 | 6.870032e-04 |
| 18 | 19 | -2.6019091 | 255.11065 | 9.812584e-03 |
| 19 | 20 | 0.4503223 | 134.90596 | 6.532009e-01 |
| 20 | 21 | 1.5059378 | 248.77332 | 1.333518e-01 |
| 21 | 22 | 2.0158644 | 84.28386 | 4.699962e-02 |
| 22 | 23 | -3.8068297 | 57.55886 | 3.436805e-04 |
| 23 | 24 | 4.4094793 | 2135.84040 | 1.087616e-05 |
Cohen’s D
i_d <- mapply(i_my.d,seq(12,23,1), seq(13,24,1),SIMPLIFY=FALSE)
post_cohen_d(i_d)
Summary of Cohen’s D
| t | t+1 | Cohen’s d |
|---|---|---|
| 12 | 13 | -0.3467518 |
| 13 | 14 | 0.1902125 |
| 14 | 15 | 0.1990807 |
| 15 | 16 | -0.3757604 |
| 16 | 17 | 0.1451672 |
| 17 | 18 | 0.2369631 |
| 18 | 19 | -0.3007221 |
| 19 | 20 | 0.0377993 |
| 20 | 21 | 0.1020099 |
| 21 | 22 | 0.2971566 |
| 22 | 23 | -0.4621942 |
| 23 | 24 | 0.1900173 |
Mean Differences
i_meandiff <- mapply(i_mean, seq(12,23,1), seq(13,24,1)) #mean differences for consecutive post-onset months
post_mean_diff(i_meandiff)
Summary of Mean Differences
| t | t+1 | Mean Difference |
|---|---|---|
| 12 | 13 | -0.2878044 |
| 13 | 14 | 0.1550533 |
| 14 | 15 | 0.1624754 |
| 15 | 16 | -0.3241516 |
| 16 | 17 | 0.1289192 |
| 17 | 18 | 0.2083141 |
| 18 | 19 | -0.2363725 |
| 19 | 20 | 0.0329017 |
| 20 | 21 | 0.0885966 |
| 21 | 22 | 0.2292627 |
| 22 | 23 | -0.3911951 |
| 23 | 24 | 0.1657095 |
We-words
T-test
we_ttest <- mapply(we_my.t, seq(12,23,1), seq(13,24,1), SIMPLIFY = FALSE) #compare each month t (first sequence) to t + 1 (second sequence)
post_pandemic_summary(we_ttest)
Summary of Welch’s t-Tests
| t | t + 1 | t-value | Degrees of Freedom | p-value |
|---|---|---|---|---|
| 12 | 13 | 4.1037791 | 527.07583 | 4.708824e-05 |
| 13 | 14 | 0.9116989 | 378.81928 | 3.625070e-01 |
| 14 | 15 | -3.3226285 | 253.13940 | 1.023448e-03 |
| 15 | 16 | 2.4647106 | 373.96103 | 1.416113e-02 |
| 16 | 17 | -0.3375119 | 197.51750 | 7.360894e-01 |
| 17 | 18 | -4.2758502 | 229.49548 | 2.793946e-05 |
| 18 | 19 | 2.5509775 | 262.60210 | 1.130991e-02 |
| 19 | 20 | -0.1421962 | 131.79434 | 8.871422e-01 |
| 20 | 21 | -1.9395335 | 238.21223 | 5.361708e-02 |
| 21 | 22 | -0.2952385 | 84.06212 | 7.685396e-01 |
| 22 | 23 | 0.8556597 | 55.76358 | 3.958478e-01 |
| 23 | 24 | -0.3495394 | 2137.76534 | 7.267188e-01 |
Cohen’s D
we_d <- mapply(we_my.d, seq(12,23,1), seq(13,24,1),SIMPLIFY=FALSE)
post_cohen_d(we_d)
Summary of Cohen’s D
| t | t+1 | Cohen’s d |
|---|---|---|
| 12 | 13 | 0.2639367 |
| 13 | 14 | 0.0549934 |
| 14 | 15 | -0.2594704 |
| 15 | 16 | 0.2501259 |
| 16 | 17 | -0.0255875 |
| 17 | 18 | -0.3276203 |
| 18 | 19 | 0.2920369 |
| 19 | 20 | -0.0129636 |
| 20 | 21 | -0.1443587 |
| 21 | 22 | -0.0435999 |
| 22 | 23 | 0.1169953 |
| 23 | 24 | -0.0150573 |
Mean Differences
we_meandiff <- mapply(we_mean, seq(12,23,1), seq(13,24,1)) #mean differences for consecutive post-onset months
post_mean_diff(we_meandiff)
Summary of Mean Differences
| t | t+1 | Mean Difference |
|---|---|---|
| 12 | 13 | 0.3777932 |
| 13 | 14 | 0.0763380 |
| 14 | 15 | -0.3676046 |
| 15 | 16 | 0.3649285 |
| 16 | 17 | -0.0365235 |
| 17 | 18 | -0.4710551 |
| 18 | 19 | 0.4168557 |
| 19 | 20 | -0.0182846 |
| 20 | 21 | -0.2041654 |
| 21 | 22 | -0.0608833 |
| 22 | 23 | 0.1582888 |
| 23 | 24 | -0.0209555 |
How did language change relative to baseline (one year before the
pandemic; 03/2019)?
Analytic Thinking
T-test
analytic_ttest_baseline <- mapply(analytic_my.t, 0, seq(1,24,1), SIMPLIFY = FALSE) #compare baseline t = 0 (03/2019) to each month t[i]
baseline_ttest(analytic_ttest_baseline)
Summary of Welch’s t-Tests
| t | t + 1 | t-value | Degrees of Freedom | p-value |
|---|---|---|---|---|
| 0 | 1 | 1.5025175 | 1161.46333 | 1.332353e-01 |
| 0 | 2 | 0.6860139 | 1036.84856 | 4.928577e-01 |
| 0 | 3 | 0.2507930 | 245.14330 | 8.021842e-01 |
| 0 | 4 | 2.6728372 | 1120.10414 | 7.630544e-03 |
| 0 | 5 | 0.4785480 | 1004.80108 | 6.323643e-01 |
| 0 | 6 | 1.0343183 | 280.42507 | 3.018785e-01 |
| 0 | 7 | 2.6674817 | 1049.94370 | 7.759826e-03 |
| 0 | 8 | 1.4045584 | 993.35140 | 1.604652e-01 |
| 0 | 9 | 1.0147460 | 328.09331 | 3.109746e-01 |
| 0 | 10 | 1.5505263 | 286.24028 | 1.221201e-01 |
| 0 | 11 | 1.9737798 | 1061.63898 | 4.866575e-02 |
| 0 | 12 | 1.3053906 | 1272.10102 | 1.919959e-01 |
| 0 | 13 | 5.7769548 | 623.93692 | 1.200948e-08 |
| 0 | 14 | 5.1516350 | 929.47739 | 3.153290e-07 |
| 0 | 15 | 1.4218984 | 370.16499 | 1.558977e-01 |
| 0 | 16 | 3.9258380 | 316.92397 | 1.060657e-04 |
| 0 | 17 | 3.2571976 | 918.08556 | 1.166437e-03 |
| 0 | 18 | 0.1171218 | 302.23369 | 9.068413e-01 |
| 0 | 19 | 0.8462858 | 164.42299 | 3.986233e-01 |
| 0 | 20 | 3.7364297 | 920.43945 | 1.981471e-04 |
| 0 | 21 | 0.6393118 | 331.79337 | 5.230612e-01 |
| 0 | 22 | 2.6168435 | 63.20064 | 1.108971e-02 |
| 0 | 23 | 3.7686675 | 1111.95144 | 1.727388e-04 |
| 0 | 24 | 2.4325523 | 1125.18803 | 1.514789e-02 |
Cohen’s D
analytic_D_baseline <- mapply(analytic_my.d,0, seq(1,24,1),SIMPLIFY=FALSE)
baseline_cohen_d(analytic_D_baseline)
Summary of Cohen’s D
| t | t + 1 | Cohen’s d |
|---|---|---|
| 0 | 1 | 0.0879752 |
| 0 | 2 | 0.0329980 |
| 0 | 3 | 0.0206107 |
| 0 | 4 | 0.1587215 |
| 0 | 5 | 0.0235235 |
| 0 | 6 | 0.0867045 |
| 0 | 7 | 0.1620807 |
| 0 | 8 | 0.0687147 |
| 0 | 9 | 0.0805849 |
| 0 | 10 | 0.1282654 |
| 0 | 11 | 0.1023933 |
| 0 | 12 | 0.0694416 |
| 0 | 13 | 0.3954264 |
| 0 | 14 | 0.2534133 |
| 0 | 15 | 0.1138341 |
| 0 | 16 | 0.3057368 |
| 0 | 17 | 0.1588173 |
| 0 | 18 | 0.0101558 |
| 0 | 19 | 0.0861013 |
| 0 | 20 | 0.1802980 |
| 0 | 21 | 0.0529819 |
| 0 | 22 | 0.3237240 |
| 0 | 23 | 0.2018620 |
| 0 | 24 | 0.1262979 |
Mean Differences
analytic_mean_baseline <- mapply(analytic_mean, 0, seq(1,24,1)) #across all of the months comparing to time zero
baseline_mean_diff(analytic_mean_baseline)
Summary of Mean Differences
| t | t+1 | Mean Difference |
|---|---|---|
| 0 | 1 | 1.3114081 |
| 0 | 2 | 0.4935284 |
| 0 | 3 | 0.3039970 |
| 0 | 4 | 2.3251490 |
| 0 | 5 | 0.3411544 |
| 0 | 6 | 1.3027809 |
| 0 | 7 | 2.3954214 |
| 0 | 8 | 0.9976299 |
| 0 | 9 | 1.1986758 |
| 0 | 10 | 1.9188652 |
| 0 | 11 | 1.4369448 |
| 0 | 12 | 1.0438407 |
| 0 | 13 | 5.7784625 |
| 0 | 14 | 3.5880071 |
| 0 | 15 | 1.7436794 |
| 0 | 16 | 4.4919977 |
| 0 | 17 | 2.2602447 |
| 0 | 18 | 0.1589776 |
| 0 | 19 | 1.3178462 |
| 0 | 20 | 2.5943085 |
| 0 | 21 | 0.8151869 |
| 0 | 22 | 4.8802673 |
| 0 | 23 | 2.8046380 |
| 0 | 24 | 1.8105501 |
Cogproc
T-test
cogproc_ttest_baseline <- mapply(cogproc_my.t, 0, seq(1,24,1), SIMPLIFY = FALSE) #compare baseline t = 0 (03/2019) to each month t[i]
baseline_ttest(cogproc_ttest_baseline)
Summary of Welch’s t-Tests
| t | t + 1 | t-value | Degrees of Freedom | p-value |
|---|---|---|---|---|
| 0 | 1 | -0.5097155 | 1156.50973 | 6.103480e-01 |
| 0 | 2 | -0.7178587 | 1035.96962 | 4.730063e-01 |
| 0 | 3 | -0.2391309 | 218.72044 | 8.112280e-01 |
| 0 | 4 | -1.8416817 | 1119.69687 | 6.578607e-02 |
| 0 | 5 | -0.3763500 | 1051.93803 | 7.067326e-01 |
| 0 | 6 | 0.2442296 | 282.79380 | 8.072301e-01 |
| 0 | 7 | -1.7141683 | 1029.21251 | 8.679890e-02 |
| 0 | 8 | -0.9538148 | 1076.64206 | 3.403915e-01 |
| 0 | 9 | 1.0445702 | 320.30692 | 2.970093e-01 |
| 0 | 10 | -0.8168779 | 255.25892 | 4.147599e-01 |
| 0 | 11 | -0.7245359 | 1147.57474 | 4.688845e-01 |
| 0 | 12 | -2.0279981 | 1307.90475 | 4.276280e-02 |
| 0 | 13 | -5.7012479 | 609.24510 | 1.854777e-08 |
| 0 | 14 | -6.5910797 | 924.04251 | 7.328808e-11 |
| 0 | 15 | -0.3855580 | 395.99482 | 7.000311e-01 |
| 0 | 16 | -4.0811802 | 298.22073 | 5.758392e-05 |
| 0 | 17 | -5.4650159 | 949.00294 | 5.916345e-08 |
| 0 | 18 | 0.9264753 | 310.66778 | 3.549182e-01 |
| 0 | 19 | -0.5797074 | 184.73797 | 5.628182e-01 |
| 0 | 20 | -3.7993649 | 936.80835 | 1.544264e-04 |
| 0 | 21 | 0.7639021 | 341.61474 | 4.454529e-01 |
| 0 | 22 | -1.3820415 | 61.97292 | 1.719203e-01 |
| 0 | 23 | -1.0690564 | 1140.02341 | 2.852706e-01 |
| 0 | 24 | -1.8593187 | 1172.33479 | 6.323237e-02 |
Cohen’s D
cogproc_D_baseline <- mapply(cogproc_my.d, 0, seq(1,24,1),SIMPLIFY=FALSE)
baseline_cohen_d(cogproc_D_baseline)
Summary of Cohen’s D
| t | t + 1 | Cohen’s d |
|---|---|---|
| 0 | 1 | -0.0298959 |
| 0 | 2 | -0.0345459 |
| 0 | 3 | -0.0213194 |
| 0 | 4 | -0.1093919 |
| 0 | 5 | -0.0180369 |
| 0 | 6 | 0.0203613 |
| 0 | 7 | -0.1048291 |
| 0 | 8 | -0.0445936 |
| 0 | 9 | 0.0841121 |
| 0 | 10 | -0.0731906 |
| 0 | 11 | -0.0364241 |
| 0 | 12 | -0.1070381 |
| 0 | 13 | -0.3938811 |
| 0 | 14 | -0.3255788 |
| 0 | 15 | -0.0297828 |
| 0 | 16 | -0.3291694 |
| 0 | 17 | -0.2601030 |
| 0 | 18 | 0.0788773 |
| 0 | 19 | -0.0527050 |
| 0 | 20 | -0.1809343 |
| 0 | 21 | 0.0622160 |
| 0 | 22 | -0.1777619 |
| 0 | 23 | -0.0568265 |
| 0 | 24 | -0.0951265 |
Mean Differences
cogproc_mean_baseline <- mapply(cogproc_mean, 0, seq(1,24,1)) #compare baseline 03/2019 to each month
baseline_mean_diff(cogproc_mean_baseline)
Summary of Mean Differences
| t | t+1 | Mean Difference |
|---|---|---|
| 0 | 1 | -0.6107287 |
| 0 | 2 | 0.1784774 |
| 0 | 3 | 0.6094504 |
| 0 | 4 | -0.6540232 |
| 0 | 5 | 0.1559844 |
| 0 | 6 | 0.7442075 |
| 0 | 7 | -0.2962170 |
| 0 | 8 | -0.2746360 |
| 0 | 9 | 0.5304979 |
| 0 | 10 | -0.5357971 |
| 0 | 11 | 0.2775877 |
| 0 | 12 | -0.0886600 |
| 0 | 13 | -0.6107287 |
| 0 | 14 | 0.1784774 |
| 0 | 15 | 0.6094504 |
| 0 | 16 | -0.6540232 |
| 0 | 17 | 0.1559844 |
| 0 | 18 | 0.7442075 |
| 0 | 19 | -0.2962170 |
| 0 | 20 | -0.2746360 |
| 0 | 21 | 0.5304979 |
| 0 | 22 | -0.5357971 |
| 0 | 23 | 0.2775877 |
| 0 | 24 | -0.0886600 |
I-words
T-test
i_ttest_baseline <- mapply(i_my.t, 0, seq(1,24,1), SIMPLIFY = FALSE) #compare baseline t = 0 (03/2019) to each month t[i]
baseline_ttest(i_ttest_baseline)
Summary of Welch’s t-Tests
| t | t + 1 | t-value | Degrees of Freedom | p-value |
|---|---|---|---|---|
| 0 | 1 | -3.3449936 | 1143.81760 | 8.495412e-04 |
| 0 | 2 | -1.1963077 | 1155.18280 | 2.318220e-01 |
| 0 | 3 | -0.1911368 | 213.55326 | 8.486000e-01 |
| 0 | 4 | -4.1439455 | 1114.30669 | 3.672274e-05 |
| 0 | 5 | -0.6476795 | 1056.55931 | 5.173329e-01 |
| 0 | 6 | -1.6111266 | 278.02962 | 1.082868e-01 |
| 0 | 7 | -3.3533234 | 1035.23122 | 8.273950e-04 |
| 0 | 8 | -2.0582130 | 1066.95830 | 3.981213e-02 |
| 0 | 9 | -1.4167584 | 265.19170 | 1.577272e-01 |
| 0 | 10 | -2.7747321 | 284.30487 | 5.890772e-03 |
| 0 | 11 | -1.9848824 | 1154.30486 | 4.739397e-02 |
| 0 | 12 | -0.3319999 | 1263.49829 | 7.399444e-01 |
| 0 | 13 | -5.0279656 | 571.48514 | 6.644118e-07 |
| 0 | 14 | -3.7092732 | 958.88047 | 2.197939e-04 |
| 0 | 15 | 0.2214347 | 390.57770 | 8.248697e-01 |
| 0 | 16 | -3.9254650 | 253.43509 | 1.115955e-04 |
| 0 | 17 | -4.4733002 | 1005.42169 | 8.580050e-06 |
| 0 | 18 | 0.4134953 | 350.62439 | 6.794966e-01 |
| 0 | 19 | -2.6459757 | 180.59824 | 8.864330e-03 |
| 0 | 20 | -4.3779164 | 986.11105 | 1.326045e-05 |
| 0 | 21 | -1.3221651 | 371.12983 | 1.869275e-01 |
| 0 | 22 | 1.3507586 | 63.33638 | 1.815790e-01 |
| 0 | 23 | -5.6222563 | 1250.83820 | 2.322252e-08 |
| 0 | 24 | -1.8930601 | 1254.79690 | 5.857980e-02 |
Cohen’s D
i_D_baseline <- mapply(i_my.d, 0, seq(1,24,1),SIMPLIFY=FALSE)
baseline_cohen_d(i_D_baseline)
Summary of Cohen’s D
| t | t + 1 | Cohen’s d |
|---|---|---|
| 0 | 1 | -0.1965974 |
| 0 | 2 | -0.0543981 |
| 0 | 3 | -0.0173720 |
| 0 | 4 | -0.2467407 |
| 0 | 5 | -0.0309676 |
| 0 | 6 | -0.1358241 |
| 0 | 7 | -0.2047181 |
| 0 | 8 | -0.0966976 |
| 0 | 9 | -0.1296303 |
| 0 | 10 | -0.2305339 |
| 0 | 11 | -0.0995545 |
| 0 | 12 | -0.0176937 |
| 0 | 13 | -0.3562055 |
| 0 | 14 | -0.1785725 |
| 0 | 15 | 0.0172266 |
| 0 | 16 | -0.3536629 |
| 0 | 17 | -0.2047237 |
| 0 | 18 | 0.0327380 |
| 0 | 19 | -0.2453415 |
| 0 | 20 | -0.2010721 |
| 0 | 21 | -0.1028381 |
| 0 | 22 | 0.1664219 |
| 0 | 23 | -0.2903836 |
| 0 | 24 | -0.0945412 |
Mean Differences
i_mean_baseline <- mapply(i_mean, 0, seq(1,24,1)) #compare baseline 03/2019 to each month
baseline_mean_diff(i_mean_baseline)
Summary of Mean Differences
| t | t+1 | Mean Difference |
|---|---|---|
| 0 | 1 | -0.1747670 |
| 0 | 2 | -0.0504304 |
| 0 | 3 | -0.0148774 |
| 0 | 4 | -0.2082233 |
| 0 | 5 | -0.0265697 |
| 0 | 6 | -0.1159251 |
| 0 | 7 | -0.1744079 |
| 0 | 8 | -0.0846426 |
| 0 | 9 | -0.1162156 |
| 0 | 10 | -0.1958046 |
| 0 | 11 | -0.0842683 |
| 0 | 12 | -0.0149918 |
| 0 | 13 | -0.3027962 |
| 0 | 14 | -0.1477429 |
| 0 | 15 | 0.0147325 |
| 0 | 16 | -0.3094191 |
| 0 | 17 | -0.1804999 |
| 0 | 18 | 0.0278142 |
| 0 | 19 | -0.2085583 |
| 0 | 20 | -0.1756567 |
| 0 | 21 | -0.0870600 |
| 0 | 22 | 0.1422027 |
| 0 | 23 | -0.2489924 |
| 0 | 24 | -0.0832828 |
We-words
T-test
we_ttest_baseline <- mapply(we_my.t, 0, seq(1,24,1), SIMPLIFY = FALSE) #compare baseline t = 0 (03/2019) to each month t[i]
baseline_ttest(we_ttest_baseline)
Summary of Welch’s t-Tests
| t | t + 1 | t-value | Degrees of Freedom | p-value |
|---|---|---|---|---|
| 0 | 1 | 0.5717846 | 1161.88366 | 5.675785e-01 |
| 0 | 2 | 1.5919359 | 1008.44599 | 1.117125e-01 |
| 0 | 3 | -1.0685461 | 214.74566 | 2.864739e-01 |
| 0 | 4 | 0.6153736 | 1116.22594 | 5.384335e-01 |
| 0 | 5 | 0.9396382 | 979.10341 | 3.476349e-01 |
| 0 | 6 | -1.1795694 | 280.31623 | 2.391716e-01 |
| 0 | 7 | -0.2036380 | 1067.87587 | 8.386752e-01 |
| 0 | 8 | 0.6497173 | 972.54306 | 5.160283e-01 |
| 0 | 9 | -0.6307460 | 351.28994 | 5.286168e-01 |
| 0 | 10 | -0.9676900 | 309.04342 | 3.339559e-01 |
| 0 | 11 | -0.9267542 | 1073.79066 | 3.542624e-01 |
| 0 | 12 | -0.4001689 | 1197.17272 | 6.891035e-01 |
| 0 | 13 | 3.3603937 | 676.58875 | 8.220450e-04 |
| 0 | 14 | 5.6600766 | 890.33555 | 2.040178e-08 |
| 0 | 15 | 0.4231819 | 395.82346 | 6.723924e-01 |
| 0 | 16 | 3.3898118 | 317.82041 | 7.875779e-04 |
| 0 | 17 | 5.1355858 | 889.19740 | 3.456716e-07 |
| 0 | 18 | -0.7164395 | 361.98379 | 4.741820e-01 |
| 0 | 19 | 2.3093761 | 191.37725 | 2.199015e-02 |
| 0 | 20 | 4.1802245 | 873.54302 | 3.205482e-05 |
| 0 | 21 | 0.8666887 | 390.06057 | 3.866454e-01 |
| 0 | 22 | 0.2287513 | 64.77158 | 8.197829e-01 |
| 0 | 23 | 2.5427088 | 1081.13071 | 1.113820e-02 |
| 0 | 24 | 2.2872647 | 1080.95421 | 2.237292e-02 |
Cohen’s D
we_D_baseline <- mapply(we_my.d, 0, seq(1,24,1),SIMPLIFY=FALSE)
baseline_cohen_d(we_D_baseline)
Summary of Cohen’s D
| t | t + 1 | Cohen’s d |
|---|---|---|
| 0 | 1 | 0.0334412 |
| 0 | 2 | 0.0777773 |
| 0 | 3 | -0.0966754 |
| 0 | 4 | 0.0362120 |
| 0 | 5 | 0.0468851 |
| 0 | 6 | -0.0989057 |
| 0 | 7 | -0.0122764 |
| 0 | 8 | 0.0321927 |
| 0 | 9 | -0.0482579 |
| 0 | 10 | -0.0764371 |
| 0 | 11 | -0.0478523 |
| 0 | 12 | -0.0216259 |
| 0 | 13 | 0.2228626 |
| 0 | 14 | 0.2873740 |
| 0 | 15 | 0.0326963 |
| 0 | 16 | 0.2635803 |
| 0 | 17 | 0.2566654 |
| 0 | 18 | -0.0557482 |
| 0 | 19 | 0.2039772 |
| 0 | 20 | 0.2102911 |
| 0 | 21 | 0.0657068 |
| 0 | 22 | 0.0270689 |
| 0 | 23 | 0.1373736 |
| 0 | 24 | 0.1204946 |
Mean Differences
we_mean_baseline <- mapply(we_mean, 0, seq(1,24,1)) #compare baseline 03/2019 to each month
baseline_mean_diff(we_mean_baseline)
Summary of Mean Differences
| t | t+1 | Mean Difference |
|---|---|---|
| 0 | 1 | 0.0530735 |
| 0 | 2 | 0.1226640 |
| 0 | 3 | -0.1575023 |
| 0 | 4 | 0.0544833 |
| 0 | 5 | 0.0717923 |
| 0 | 6 | -0.1604908 |
| 0 | 7 | -0.0190853 |
| 0 | 8 | 0.0495303 |
| 0 | 9 | -0.0765531 |
| 0 | 10 | -0.1217559 |
| 0 | 11 | -0.0731274 |
| 0 | 12 | -0.0334520 |
| 0 | 13 | 0.3443412 |
| 0 | 14 | 0.4206792 |
| 0 | 15 | 0.0530747 |
| 0 | 16 | 0.4180032 |
| 0 | 17 | 0.3814797 |
| 0 | 18 | -0.0895754 |
| 0 | 19 | 0.3272803 |
| 0 | 20 | 0.3089956 |
| 0 | 21 | 0.1048303 |
| 0 | 22 | 0.0439469 |
| 0 | 23 | 0.2022358 |
| 0 | 24 | 0.1812803 |
Build our Graphs
Analytic Thinking
Analytic <- ggplot(data=df2, aes(x=Date_mean, y=Analytic_mean, group=1)) +
geom_line(colour = "dodgerblue3") +
scale_x_date(date_breaks = "3 month", date_labels = "%Y-%m") +
geom_ribbon(aes(ymin=Analytic_mean-Analytic_std.error, ymax=Analytic_mean+Analytic_std.error), alpha=0.2) +
ggtitle("Analytic Thinking") +
labs(x = "Month", y = 'Standardized score') +
plot_aes + #here's our plot aes object
geom_vline(xintercept = as.numeric(as.Date("2020-03-01")), linetype = 1) +
geom_rect(data = df2, #summer surge
aes(xmin = as.Date("2020-06-15", "%Y-%m-%d"),
xmax = as.Date("2020-07-20", "%Y-%m-%d"),
ymin = -Inf,
ymax = Inf),
fill = "gray",
alpha = 0.009) +
geom_rect(data = df2, #winter surge
aes(xmin = as.Date("2020-11-15", "%Y-%m-%d"),
xmax = as.Date("2021-01-01", "%Y-%m-%d"),
ymin = -Inf,
ymax = Inf),
fill = "gray",
alpha = 0.009)
Analytic <- Analytic + annotate(geom="text",x=as.Date("2020-07-01"),
y=43,label="Summer 2020 surge", size = 3) +
annotate(geom="text",x=as.Date("2020-12-03"),
y=43,label="Winter 2020 surge", size = 3)
Analytic

Cogproc
Cogproc <- ggplot(data=df2, aes(x=Date_mean, y=cogproc_mean, group=1)) +
geom_line(colour = "dodgerblue3") +
scale_x_date(date_breaks = "3 month", date_labels = "%Y-%m") +
geom_ribbon(aes(ymin=cogproc_mean-cogproc_std.error, ymax=cogproc_mean+cogproc_std.error), alpha=0.2) +
ggtitle("Cognitive Processing") +
labs(x = "Month", y = '% Total Words') +
plot_aes + #here's our plot aes object
geom_vline(xintercept = as.numeric(as.Date("2020-03-01")), linetype = 1) +
geom_rect(data = df2, #summer surge
aes(xmin = as.Date("2020-06-15", "%Y-%m-%d"),
xmax = as.Date("2020-07-20", "%Y-%m-%d"),
ymin = -Inf,
ymax = Inf),
fill = "gray",
alpha = 0.009) +
geom_rect(data = df2, #winter surge
aes(xmin = as.Date("2020-11-15", "%Y-%m-%d"),
xmax = as.Date("2021-01-01", "%Y-%m-%d"),
ymin = -Inf,
ymax = Inf),
fill = "gray",
alpha = 0.009)
Cogproc <- Cogproc + annotate(geom="text",x=as.Date("2020-07-01"),
y=12.5,label="Summer 2020 surge", size = 3) +
annotate(geom="text",x=as.Date("2020-12-03"),
y=12.5,label="Winter 2020 surge", size = 3)
Cogproc

I-words
i <- ggplot(data=df2, aes(x=Date_mean, y=i_mean, group=1)) +
geom_line(colour = "dodgerblue3") +
scale_x_date(date_breaks = "3 month", date_labels = "%Y-%m") +
geom_ribbon(aes(ymin=i_mean-i_std.error, ymax=i_mean+i_std.error), alpha=0.2) +
ggtitle("I-usage") +
labs(x = "Month", y = '% Total Words') +
plot_aes + #here's our plot aes object
geom_vline(xintercept = as.numeric(as.Date("2020-03-01")), linetype = 1) +
geom_rect(data = df2, #summer surge
aes(xmin = as.Date("2020-06-15", "%Y-%m-%d"),
xmax = as.Date("2020-07-20", "%Y-%m-%d"),
ymin = -Inf,
ymax = Inf),
fill = "gray",
alpha = 0.009) +
geom_rect(data = df2, #winter surge
aes(xmin = as.Date("2020-11-15", "%Y-%m-%d"),
xmax = as.Date("2021-01-01", "%Y-%m-%d"),
ymin = -Inf,
ymax = Inf),
fill = "gray",
alpha = 0.009)
i <- i + annotate(geom="text",x=as.Date("2020-07-01"),
y=1.95,label="Summer 2020 surge", size = 3) +
annotate(geom="text",x=as.Date("2020-12-03"),
y=1.95,label="Winter 2020 surge", size = 3)
i

We-words
we <- ggplot(data=df2, aes(x=Date_mean, y=we_mean, group=1)) +
geom_line(colour = "dodgerblue3") +
scale_x_date(date_breaks = "3 month", date_labels = "%Y-%m") +
geom_ribbon(aes(ymin=we_mean-we_std.error, ymax=we_mean+we_std.error), alpha=0.2) +
ggtitle("We-usage") +
labs(x = "Month", y = '% Total Words') +
plot_aes + #here's our plot aes object
geom_vline(xintercept = as.numeric(as.Date("2020-03-01")), linetype = 1) +
geom_rect(data = df2, #summer surge
aes(xmin = as.Date("2020-06-15", "%Y-%m-%d"),
xmax = as.Date("2020-07-20", "%Y-%m-%d"),
ymin = -Inf,
ymax = Inf),
fill = "gray",
alpha = 0.009) +
geom_rect(data = df2, #winter surge
aes(xmin = as.Date("2020-11-15", "%Y-%m-%d"),
xmax = as.Date("2021-01-01", "%Y-%m-%d"),
ymin = -Inf,
ymax = Inf),
fill = "gray",
alpha = 0.009)
we <- we + annotate(geom="text",x=as.Date("2020-07-01"),
y=6.5,label="Summer 2020 surge", size = 3) +
annotate(geom="text",x=as.Date("2020-12-03"),
y=6.5,label="Winter 2020 surge", size = 3)
we

Tie them all together
graphs <- ggpubr::ggarrange(Analytic,Cogproc,i,we,ncol=2, nrow=2, common.legend = TRUE, legend = "bottom")
annotate_figure(graphs,
top = text_grob("CEOs' Language Change", color = "black", face = "bold", size = 20),
bottom = text_grob("Note. Vertical line represents the onset of the pandemic.\nHorizontal shading represents standard error. Vertical bars represent virus surges.",
color = "black",
hjust = 1.1, x = 1, face = "italic", size = 16))
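To keep a copy of the combined figure on disk, assign the `annotate_figure()` result and pass it to `ggsave()`. A minimal sketch; the filename and dimensions are illustrative choices, not from the original script:

```r
# Hypothetical export step: capture the annotated figure and save it.
final_figure <- annotate_figure(graphs,
  top = text_grob("CEOs' Language Change", color = "black",
                  face = "bold", size = 20))
ggsave("ceo_language_change.png", plot = final_figure,
       width = 12, height = 9, dpi = 300)
```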

Package Citations
## - Grolemund G, Wickham H (2011). "Dates and Times Made Easy with lubridate." _Journal of Statistical Software_, *40*(3), 1-25. <https://www.jstatsoft.org/v40/i03/>.
## - Lemon J (2006). "Plotrix: a package in the red light district of R." _R-News_, *6*(4), 8-12.
## - Kassambara A (2020). _ggpubr: 'ggplot2' Based Publication Ready Plots_. R package version 0.4.0, <https://CRAN.R-project.org/package=ggpubr>.
## - Kuhn M (2022). _caret: Classification and Regression Training_. R package version 6.0-93, <https://CRAN.R-project.org/package=caret>.
## - Lin G (2022). _reactable: Interactive Data Tables Based on 'React Table'_. R package version 0.3.0, <https://CRAN.R-project.org/package=reactable>.
## - Müller K, Wickham H (2022). _tibble: Simple Data Frames_. R package version 3.1.8, <https://CRAN.R-project.org/package=tibble>.
## - R Core Team (2022). _R: A Language and Environment for Statistical Computing_. R Foundation for Statistical Computing, Vienna, Austria. <https://www.R-project.org/>.
## - Rinker TW, Kurkiewicz D (2018). _pacman: Package Management for R_. version 0.5.0, <http://github.com/trinker/pacman>.
## - Robinson D, Hayes A, Couch S (2022). _broom: Convert Statistical Objects into Tidy Tibbles_. R package version 1.0.1, <https://CRAN.R-project.org/package=broom>.
## - Sarkar D (2008). _Lattice: Multivariate Data Visualization with R_. Springer, New York. ISBN 978-0-387-75968-5, <http://lmdvr.r-forge.r-project.org>.
## - Spinu V (2022). _timechange: Efficient Manipulation of Date-Times_. R package version 0.1.1, <https://CRAN.R-project.org/package=timechange>.
## - Torchiano M (2020). _effsize: Efficient Effect Size Computation_. doi:10.5281/zenodo.1480624 <https://doi.org/10.5281/zenodo.1480624>, R package version 0.8.1, <https://CRAN.R-project.org/package=effsize>.
## - Wickham H (2016). _ggplot2: Elegant Graphics for Data Analysis_. Springer-Verlag New York. ISBN 978-3-319-24277-4, <https://ggplot2.tidyverse.org>.
## - Wickham H (2022). _forcats: Tools for Working with Categorical Variables (Factors)_. R package version 0.5.2, <https://CRAN.R-project.org/package=forcats>.
## - Wickham H (2022). _stringr: Simple, Consistent Wrappers for Common String Operations_. R package version 1.4.1, <https://CRAN.R-project.org/package=stringr>.
## - Wickham H, Averick M, Bryan J, Chang W, McGowan LD, François R, Grolemund G, Hayes A, Henry L, Hester J, Kuhn M, Pedersen TL, Miller E, Bache SM, Müller K, Ooms J, Robinson D, Seidel DP, Spinu V, Takahashi K, Vaughan D, Wilke C, Woo K, Yutani H (2019). "Welcome to the tidyverse." _Journal of Open Source Software_, *4*(43), 1686. doi:10.21105/joss.01686 <https://doi.org/10.21105/joss.01686>.
## - Wickham H, François R, Henry L, Müller K (2022). _dplyr: A Grammar of Data Manipulation_. R package version 1.0.10, <https://CRAN.R-project.org/package=dplyr>.
## - Wickham H, Girlich M (2022). _tidyr: Tidy Messy Data_. R package version 1.2.1, <https://CRAN.R-project.org/package=tidyr>.
## - Wickham H, Henry L (2022). _purrr: Functional Programming Tools_. R package version 1.0.0, <https://CRAN.R-project.org/package=purrr>.
## - Wickham H, Hester J, Bryan J (2022). _readr: Read Rectangular Text Data_. R package version 2.1.3, <https://CRAN.R-project.org/package=readr>.
## - Zeileis A, Grothendieck G (2005). "zoo: S3 Infrastructure for Regular and Irregular Time Series." _Journal of Statistical Software_, *14*(6), 1-27. doi:10.18637/jss.v014.i06 <https://doi.org/10.18637/jss.v014.i06>.
## - Zhu H (2021). _kableExtra: Construct Complex Table with 'kable' and Pipe Syntax_. R package version 1.3.4, <https://CRAN.R-project.org/package=kableExtra>.
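A citation list like the one above can be regenerated with base R's `citation()`; the package vector below mirrors the `pacman::p_load()` call at the top of this tutorial, and the exact output depends on the versions you have installed.

```r
# Print a text-style citation for each package used in this tutorial.
pkgs <- c("tidyverse", "zoo", "lubridate", "plotrix", "ggpubr",
          "caret", "broom", "kableExtra", "reactable", "effsize")
for (p in pkgs) print(citation(p), style = "text")
```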
All credit goes to the great Dani Cosme for teaching me how to make
these! You can find her GitHub
here!